
    Combining face detection and novelty to identify important events in a visual lifelog

    The SenseCam is a passively capturing wearable camera. Worn around the neck, it takes an average of almost 2,000 images per day, which equates to over 650,000 images per year. It is used to create a personal lifelog, a visual recording of the wearer’s life, and generates information which can be helpful as a human memory aid. For such a large amount of visual information to be of any use, it is accepted that it should be structured into “events”, of which there are about 8,000 in a wearer’s average year. In automatically segmenting SenseCam images into events, it is desirable to automatically emphasise more important events and de-emphasise mundane or routine events. This paper introduces the concept of novelty to help determine the importance of events in a lifelog. By combining novelty with face-to-face conversation detection, our system improves on previous approaches. In our experiments we use a large set of lifelog images: a total of 288,479 images collected by 6 users over a period of one month each.
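    The idea of combining novelty with conversation detection can be sketched in a few lines. This is a minimal illustration, not the paper's actual formulation: the per-event feature vectors, the mean-distance novelty score, and the fixed conversation bonus are all assumptions made for the example.

```python
# Hypothetical sketch: score each event by visual novelty (mean distance
# to all other events) plus a bonus when a face-to-face conversation was
# detected. Features, metric, and weighting are illustrative assumptions.
import math

def novelty(event_vec, all_vecs):
    """Mean Euclidean distance from this event to every other event."""
    dists = [math.dist(event_vec, other)
             for other in all_vecs if other is not event_vec]
    return sum(dists) / len(dists) if dists else 0.0

def importance(event_vec, all_vecs, has_conversation, face_weight=0.5):
    """Combine novelty with a fixed bonus for detected conversations."""
    bonus = face_weight if has_conversation else 0.0
    return novelty(event_vec, all_vecs) + bonus

events = [[0.1, 0.2], [0.1, 0.25], [0.9, 0.8]]   # toy per-event features
scores = [importance(v, events, conv)
          for v, conv in zip(events, [False, True, False])]
```

    Under this toy scoring, the visually distinct third event and the conversation event both rank above the routine first event.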

    Organising a large quantity of lifelog images

    Preliminary research indicates that a visual recording of one’s activities may be beneficial for sufferers of neurodegenerative diseases. However, there exist a number of challenges in managing the vast quantities of data generated by lifelogging devices such as the SenseCam. Our work concentrates on the following areas within visual lifelogging: segmenting sequences of images into events (e.g. breakfast, at a meeting); retrieving similar events (“what other times was I at the park?”); determining the most important events (meeting an old friend is more important than breakfast); selecting the ideal keyframe to provide an event summary; and augmenting lifelog events with images taken by millions of users of ‘Web 2.0’ websites (“show me other pictures of the Statue of Liberty to augment my own lifelog images”).

    Intelligent image processing techniques for structuring a visual diary

    The SenseCam is a small wearable personal device which automatically captures up to 3,500 images per day. This yields a very large personal collection of images, in a sense a diary of a person's day. Over one million images will need to be stored each year, so intelligent techniques are necessary for the effective searching and browsing of this image collection for important or significant events in a person's life. One of the issues is how to detect and then relate similar events in a lifetime; this is necessary in order to detect unusual or once-off events, as well as to determine routine activities. This poster will present the various sources of data that can be collected with a SenseCam device, along with other sources that can be collected to complement the SenseCam data. Different forms of image processing that can be carried out on this large set of images will be detailed, specifically how to detect which images belong to individual events, and how similar various events are to each other. There will be hundreds of thousands of images of everyday routines; as a result, more memorable events are quite likely to be significantly different from normal recurring events.
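    The step of detecting which images belong to individual events can be illustrated with a simple boundary detector: start a new event wherever consecutive images look sufficiently different. The colour-histogram descriptor and the threshold below are assumptions for illustration only; the actual system would fuse several image and sensor features.

```python
# Illustrative event segmentation: mark an event boundary wherever the
# visual difference between consecutive images exceeds a threshold.
# Descriptor (normalised histograms) and threshold are assumptions.

def l1_distance(h1, h2):
    """L1 distance between two normalised histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def segment(histograms, threshold=0.5):
    """Return the indices at which a new event is judged to begin."""
    boundaries = [0]
    for i in range(1, len(histograms)):
        if l1_distance(histograms[i - 1], histograms[i]) > threshold:
            boundaries.append(i)
    return boundaries

hists = [[1.0, 0.0], [0.9, 0.1], [0.1, 0.9], [0.0, 1.0]]  # toy descriptors
print(segment(hists))   # boundary at index 2, where the scene changes
```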

    Structuring and augmenting a visual personal diary

    This paper refers to research in the domain of visual lifelogging, whereby individuals capture much of their lives using digital cameras. The potential benefits of lifelogging include applications to review tourist trips, memory aid applications, learning assistants, etc. The SenseCam, developed by Microsoft Research in Cambridge, UK, is a small wearable device which incorporates a digital camera and onboard sensors (motion, ambient temperature, light level, and passive infrared to detect the presence of people). There exist a number of challenges in managing the vast quantities of data generated by lifelogging devices such as the SenseCam. Our work concentrates on the following areas within visual lifelogging: segmenting sequences of images into events (e.g. breakfast, at a meeting); retrieving similar events ("what other times was I at the park?"); determining the most important events (meeting an old friend is more important than breakfast); selecting the ideal keyframe to provide an event summary; and augmenting lifelog events with images taken by millions of users of "Web 2.0" websites ("show me other pictures of the Statue of Liberty to augment my own lifelog images").

    AAL research. Creating the future technologies in Ireland – challenges and realities

    This talk discusses the broad range of AAL-related research projects in which CLARITY is involved, and shows that multi-disciplinary collaboration is necessary to best advance the uptake of AAL technologies.

    Providing effective memory retrieval cues through automatic structuring and augmentation of a lifelog of images

    Lifelogging is an area of research concerned with digitally capturing many aspects of an individual's life, and within this rapidly emerging field lies the significant challenge of managing images passively captured by an individual of their daily life. Possible applications vary from helping those with neurodegenerative conditions recall events from memory, to the maintenance and augmentation of extensive image collections of a tourist's trips. However, a large lifelog of images can quickly amass, with an average of 700,000 images captured each year using a device such as the SenseCam. We address the problem of managing this vast collection of personal images by investigating automatic techniques that: 1. identify distinct events within a full day of lifelog images (which typically consists of 2,000 images), e.g. breakfast, working on a PC, a meeting; 2. find events similar to a given event in a person's lifelog, e.g. "show me other events where I was in the park"; 3. determine those events that are more important or unusual to the user, and select a relevant keyframe image for visual display of an event, e.g. a "meeting" is more interesting to review than "working on a PC"; 4. augment the images from a wearable camera with higher quality images from external "Web 2.0" sources, e.g. "find me pictures taken by others of the U2 concert in Croke Park". In this dissertation we discuss novel techniques to realise each of these facets and evaluate how effective they are. The significance of this work is of benefit not only to the lifelogging community, but also to cognitive psychology researchers studying the potential benefits of lifelogging devices to those with neurodegenerative diseases.
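    One plausible rule for the keyframe-selection facet is to pick the event image whose feature vector lies closest to the event's mean, on the assumption that it best represents the event. This is a hedged sketch only; the features and distance metric below are illustrative, not the dissertation's actual method.

```python
# Hypothetical keyframe selection: return the image nearest the event
# centroid in feature space. Per-image features are toy assumptions.
import math

def select_keyframe(features):
    """Index of the image closest to the mean feature vector."""
    n = len(features)
    centroid = [sum(col) / n for col in zip(*features)]
    return min(range(n), key=lambda i: math.dist(features[i], centroid))

event = [[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]]   # toy per-image features
print(select_keyframe(event))                  # the middle image, index 1
```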

    Using the SenseCam hands-on!

    Using the Microsoft Research SenseCam to measure physical activity and sedentary behaviour. Dr Doherty will give a hands-on demonstration and workshop on using the Microsoft Research SenseCam, a wearable sensor that can be used to identify and contextualise bouts of active travel, physical activity, and sedentary behaviour. This workshop will be of interest to physical activity researchers and behavioural scientists interested in the measurement of physical activity and sedentary behaviour, as well as to computer scientists interested in wearable sensors and image processing challenges.

    Using wearable image sensing to measure physical activity & sedentary behavior

    This presentation will be of interest to 1) physical activity researchers interested in better capturing participants' physical activity and sedentary behaviour, and 2) computer science researchers interested in wearable sensors and image processing challenges in a new application area. In this lecture, Dr Doherty will discuss the use of the Microsoft Research SenseCam, a wearable camera sensor which automatically captures up to 5,000 first-person point-of-view images. The images help to identify the type of activity a participant is involved in, and can also help determine the environment and situation surrounding the given activity. Because up to 35,000 images can be captured from a participant each week, there exists a substantial information management challenge in storing, annotating, and retrieving image content. This talk will discuss current state-of-the-art computational approaches applied to SenseCam images, and their application in the field of physical activity.

    Automatically detecting important moments from everyday life using a mobile device

    This paper proposes a new method to detect important moments in our lives. Our work is motivated by the increase in the quantity of multimedia data, such as videos and photos, which capture life experiences into personal archives. Even though such media-rich data suggests visual processing to identify important moments, the oft-mentioned problem of the semantic gap means that users cannot automatically identify or retrieve important moments using visual processing techniques alone. Our approach utilises on-board sensors from mobile devices to automatically identify important moments as they are happening.
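    The sensor-driven idea can be illustrated with a minimal sketch: flag a moment as potentially important when a sensor reading deviates sharply from its recent average. The accelerometer-magnitude stream, window size, and threshold factor below are illustrative assumptions, not the method described in the paper.

```python
# Hypothetical important-moment detector: flag indices where a sensor
# reading exceeds `factor` times the trailing-window mean. The stream
# and parameters are toy assumptions for illustration only.

def detect_moments(readings, window=3, factor=2.0):
    """Indices where a reading spikes above factor x the recent mean."""
    moments = []
    for i in range(window, len(readings)):
        baseline = sum(readings[i - window:i]) / window
        if baseline > 0 and readings[i] > factor * baseline:
            moments.append(i)
    return moments

stream = [1.0, 1.1, 0.9, 1.0, 3.5, 1.0]   # toy sensor magnitudes
print(detect_moments(stream))             # the spike at index 4
```

    Running such a detector on-device avoids the semantic-gap problem entirely, since no visual interpretation of the images is required to flag a candidate moment.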